
Evaluating and Comparing Deep Models

To help evaluate Deep Learning segmentation quality, this software release features a model evaluation tool that lets you compare different models with selected metrics, such as binary cross entropy, mean absolute error, and Poisson for regression, as well as Jaccard similarity, accuracy, and Dice for binary segmentation.
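The segmentation metrics above have simple definitions: Jaccard similarity is the ratio of the intersection to the union of the predicted and reference masks, while the Dice coefficient is twice the intersection divided by the total mask size. The following is a minimal NumPy sketch of both, for reference only; it is not part of the evaluation tool and the function names are illustrative.

```python
import numpy as np

def jaccard(pred, target):
    """Jaccard similarity (IoU) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

def dice(pred, target):
    """Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 2x3 masks: 2 pixels agree, 4 pixels are in the union.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(jaccard(pred, target))  # 0.5
print(dice(pred, target))     # 0.666...
```

Note that Dice is always at least as large as Jaccard for the same pair of masks; the two are monotonically related, so they rank models identically but on different scales.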

Choose Artificial Intelligence > Deep Learning Model Evaluation Tool on the menu bar to open the Model Evaluation dialog, shown below.

Model Evaluation dialog

The models displayed and the available metrics are filtered automatically to those matching the selected number of training sets and the selected output type.


Dragonfly Help Live Version